Results 1 - 20 of 394
1.
Rev. bras. oftalmol ; 83: e0006, 2024. tab, graf
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1535603

ABSTRACT

ABSTRACT Objective: To obtain fundoscopy images with portable, low-cost equipment and, using artificial intelligence, to assess the presence of diabetic retinopathy (DR). Methods: Fundus images of diabetic patients' eyes were obtained using a smartphone coupled to a device with a 20D lens. Using artificial intelligence (AI), the presence of DR was classified by a binary algorithm. Results: 97 ocular fundoscopy images were evaluated (45 normal and 52 with DR). With AI, diagnostic accuracy in classifying the presence of DR ranged from approximately 70% to 100%. Conclusion: The approach using a low-cost portable device showed satisfactory efficacy in screening diabetic patients with or without diabetic retinopathy, making it useful for settings lacking infrastructure.

2.
Arq. bras. oftalmol ; 87(5): e2022, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1527853

ABSTRACT

ABSTRACT Purpose: This study aimed to evaluate the classification performance of pretrained convolutional neural network models, or architectures, using a fundus image dataset containing eight disease labels. Methods: A publicly available ocular disease intelligent recognition database was used for the diagnosis of eight diseases. This database contains a total of 10,000 fundus images from both eyes of 5,000 patients, covering eight categories: healthy, diabetic retinopathy, glaucoma, cataract, age-related macular degeneration, hypertension, myopia, and others. Ocular disease classification performance was investigated by constructing three pretrained convolutional neural network architectures, VGG16, Inceptionv3, and ResNet50, with the adaptive moment estimation (Adam) optimizer. These models were implemented in Google Colab, which made the task straightforward without hours spent installing the environment and supporting libraries. To evaluate the effectiveness of the models, the dataset was divided into 70%, 10%, and 20% for training, validation, and testing, respectively. For each classification, the training images were augmented to 10,000 fundus images. Results: ResNet50 achieved an accuracy of 97.1%; sensitivity, 78.5%; specificity, 98.5%; and precision, 79.7%, and had the best area under the curve and final score for classifying cataract (area under the curve = 0.964, final score = 0.903). By contrast, VGG16 achieved an accuracy of 96.2%; sensitivity, 56.9%; specificity, 99.2%; precision, 84.1%; area under the curve, 0.949; and final score, 0.857. Conclusions: These results demonstrate the ability of pretrained convolutional neural network architectures to identify ophthalmological diseases from fundus images. ResNet50 can be a good architecture for detecting and classifying glaucoma, cataract, hypertension, and myopia; Inceptionv3 for age-related macular degeneration and other diseases; and VGG16 for the healthy class and diabetic retinopathy.
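The accuracy, sensitivity, specificity, and precision figures reported above are standard per-class quantities derived from a confusion matrix. A minimal sketch, with made-up counts (the abstract does not report the underlying confusion matrices):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard per-class metrics from one-vs-rest confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision

# Illustrative counts only, not from the study
acc, sens, spec, prec = binary_metrics(tp=80, fp=20, tn=880, fn=20)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f} prec={prec:.3f}")
```

Note how a class-imbalanced test set (many true negatives) can yield high accuracy and specificity alongside much lower sensitivity, the same pattern seen in the VGG16 results above.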

3.
Chinese Journal of Clinical Thoracic and Cardiovascular Surgery ; (12): 145-152, 2024.
Article in Chinese | WPRIM | ID: wpr-1006526

ABSTRACT

Lung adenocarcinoma is a prevalent histological subtype of non-small cell lung cancer, with morphologic and molecular features that are critical for prognosis and treatment planning. In recent years, with the development of artificial intelligence technology, its application to the study of pathological subtypes and gene expression in lung adenocarcinoma has gained widespread attention. This paper reviews the progress of machine learning and deep learning in pathological subtype classification and gene expression analysis of lung adenocarcinoma, summarizes current problems and challenges, and outlines future directions for artificial intelligence in lung adenocarcinoma research.

4.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 43-49, 2024.
Article in Chinese | WPRIM | ID: wpr-1003443

ABSTRACT

Objective: To research the effectiveness of deep learning techniques in intelligently diagnosing dental caries and periapical periodontitis, and to explore the preliminary application value of deep learning in the diagnosis of oral diseases. Methods: A dataset containing 2,298 periapical films, including healthy teeth, dental caries, and periapical periodontitis, was used for the study. The dataset was randomly divided into 1,573 training images, 233 validation images, and 492 test images. By comparing various neural network models, the better-performing MobileNetV3 network model was selected for dental disease diagnosis, and the model was optimized by tuning the network hyperparameters. Accuracy, precision, recall, and F1 score were used to evaluate the model's ability to recognize dental caries and periapical periodontitis, and class activation maps were used to visually analyze the performance of the network model. Results: The algorithm achieved a relatively ideal intelligent diagnostic effect, with precision, recall, and accuracy of 99.42%, 99.73%, and 99.60%, respectively, and an F1 score of 99.57% for classifying healthy teeth, dental caries, and periapical periodontitis. The visualization of the class activation maps also showed that the network model can accurately extract features of dental diseases. Conclusion: The tooth lesion detection algorithm based on the MobileNetV3 network model can eliminate interference from image quality and human factors and has high diagnostic accuracy, which can meet the needs of dental education and clinical applications.
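The F1 score reported above is the harmonic mean of precision and recall, and the study's own figures reproduce it. A minimal check:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported precision (99.42%) and recall (99.73%) yield the reported F1 of 99.57%
f1 = f1_score(0.9942, 0.9973)
print(f"F1 = {f1:.2%}")
```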

5.
Rev. cuba. inform. méd ; 15(2)dic. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536291

ABSTRACT

In recent decades, photoacoustic imaging has proven effective in supporting the diagnosis of some diseases and in medical research, since it makes it possible to obtain information about the human body with specific characteristics, good resolution, and penetration depths from 1 cm to 6 cm, depending largely on the tissue studied. Photoacoustic imaging is a comparatively young and emerging modality that promises real-time measurements with non-invasive, radiation-free procedures. Applying Deep Learning to photoacoustic images, in turn, makes it possible to manage data and transform them into useful, knowledge-generating information. These applications have unique advantages that facilitate clinical adoption, and it may be possible with these techniques to provide reliable medical diagnoses. The aim of this article is therefore to provide an overview of cases combining Deep Learning with photoacoustic techniques.

6.
Rev. cuba. inform. méd ; 15(2)dic. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536294

ABSTRACT

The field of radiology has seen notable advances in recent decades, with developments ranging from image quality improvement and digitization to computer-aided detection. In particular, the emergence of Artificial Intelligence techniques based on Deep Learning and Computer Vision has promoted innovative solutions in radiological diagnosis and analysis. This article explores the relevance of open source developments and models to the progress of these techniques, highlighting the impact that collaboration and open access have had on scientific advancement in the field. The research takes a qualitative approach with a descriptive, retrospective, longitudinal scope. A documentary analysis of the evolution and impact of open source in Radiology was carried out, highlighting multidisciplinary collaboration. Use cases, advantages, challenges, and ethical considerations related to the implementation of AI-based solutions in Radiology were also examined. The open source approach has been shown to be a positive influence in Radiology, with the potential to improve medical care by offering more precise and accessible solutions. However, ethical and technical challenges remain that require attention.

7.
Colomb. med ; 54(3)sept. 2023.
Article in English | LILACS-Express | LILACS | ID: biblio-1534290

ABSTRACT

This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.

8.
Radiol. bras ; 56(5): 263-268, Sept.-Oct. 2023. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1529323

ABSTRACT

Abstract Objective: To validate a deep learning (DL) model for bone age estimation in individuals in the city of São Paulo, comparing it with the Greulich and Pyle method. Materials and Methods: This was a cross-sectional study of hand and wrist radiographs obtained for the determination of bone age. The manual analysis was performed by an experienced radiologist. The model used was based on a convolutional neural network that placed third in the 2017 Radiological Society of North America challenge. The mean absolute error (MAE) and the root-mean-square error (RMSE) were calculated for the model versus the radiologist, with comparisons by sex, race, and age. Results: The sample comprised 714 examinations. There was a correlation between the two methods, with a coefficient of determination of 0.94. The MAE of the predictions was 7.68 months, and the RMSE was 10.27 months. There were no statistically significant differences between sexes or among races (p > 0.05). The algorithm overestimated bone age in younger individuals (p = 0.001). Conclusion: Our DL algorithm demonstrated potential for estimating bone age in individuals in the city of São Paulo, regardless of sex and race. However, improvements are needed, particularly in relation to its use in younger patients.
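The MAE and RMSE used above to compare the model against the radiologist are the mean absolute difference and the root of the mean squared difference between paired estimates. A minimal sketch with illustrative bone-age values in months (not the study data):

```python
import math

def mae(pred, true):
    """Mean absolute error between paired estimates."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def rmse(pred, true):
    """Root-mean-square error; penalizes large deviations more than MAE."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Toy paired bone-age estimates (illustrative only)
model = [100.0, 130.5, 95.0, 160.0]
radiologist = [92.0, 138.0, 101.0, 155.0]
print(mae(model, radiologist), rmse(model, radiologist))
```

RMSE is always at least as large as MAE on the same pairs, which matches the 7.68-month MAE versus 10.27-month RMSE reported above.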

9.
Medisur ; 21(4)ago. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1514578

ABSTRACT

Background: autonomy allows students to think for themselves, critically and independently, to take different points of view into account, and to act accordingly. It is a necessary indicator in the study of learning-to-learn skills. Objective: to characterize autonomy as an indicator of learning-to-learn skills in medical students. Methods: a mixed research design of the sequential explanatory type was used. The research was carried out from October 2021 to March 2022 at the Cienfuegos University of Medical Sciences. The intentional, non-probabilistic sample comprised 255 first-year Medicine students. Information was collected through a questionnaire evaluating the level of development of learning-to-learn skills, observations of teaching activities, and focus groups. Results: according to the questionnaire, autonomy is present in 45.4% of the students. In the focus groups, some students acknowledged deficiencies in some indicators of autonomy, consistent with the data obtained from the observations of teaching activities. Conclusions: autonomy, as a key indicator of learning-to-learn skills in first-year students at the Cienfuegos University of Medical Sciences, showed low expression in the medical students' learning processes.

10.
Indian J Ophthalmol ; 2023 Aug; 71(8): 3039-3045
Article | IMSEAR | ID: sea-225176

ABSTRACT

Purpose: To analyze the efficacy of a deep learning (DL)-based artificial intelligence (AI) algorithm in detecting the presence of diabetic retinopathy (DR) and glaucoma suspect as compared to diagnosis by specialists and, secondarily, to explore whether the use of this algorithm can reduce cross-referral in three clinical settings: a diabetologist clinic, a retina clinic, and a glaucoma clinic. Methods: This is a prospective observational study. Patients between 35 and 65 years of age were recruited from glaucoma and retina clinics at a tertiary eye care hospital and a physician's clinic. Non-mydriatic fundus photography was performed according to the disease-specific protocols. These images were graded by the AI system and specialist graders and comparatively analyzed. Results: Of 1,085 patients, 362 were seen at the glaucoma clinic, 341 at the retina clinic, and 382 at the physician's clinic. The kappa agreement between AI and the glaucoma grader was 85% [95% confidence interval (CI): 77.55–92.45%], and that with the retina grader was 91.90% (95% CI: 87.78–96.02%). The retina grader from the glaucoma clinic had 85% agreement, and the glaucoma grader from the retina clinic had 73% agreement. The sensitivity and specificity of AI glaucoma grading were 79.37% (95% CI: 67.30–88.53%) and 99.45% (95% CI: 98.03–99.93%), respectively; DR grading had 83.33% (95% CI: 51.59–97.91%) and 98.86% (95% CI: 97.35–99.63%). The cross-referral accuracy for DR and glaucoma was 89.57% and 95.43%, respectively. Conclusion: The DL-based AI system showed high sensitivity and specificity in patients with both DR and glaucoma, and there was good agreement between the specialist graders and the AI system.
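The kappa agreement figures above come from Cohen's kappa, which corrects the raw agreement between two graders for the agreement expected by chance. A minimal sketch with toy gradings (illustrative only, not the study data):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy refer / no-refer gradings by an AI system and a specialist (made up)
ai = ["ref", "ref", "no", "no", "no", "no", "ref", "no"]
grader = ["ref", "no", "no", "no", "no", "no", "ref", "no"]
print(round(cohens_kappa(ai, grader), 3))
```

Here raw agreement is 7/8 but kappa is only about 0.71, illustrating why kappa is the preferred inter-grader statistic when one class dominates.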

11.
Indian Pediatr ; 2023 Jul; 60(7): 561-569
Article | IMSEAR | ID: sea-225442

ABSTRACT

Background: The emergence of artificial intelligence (AI) tools such as ChatGPT and Bard is disrupting a broad swathe of fields, including medicine. In pediatric medicine, AI is also increasingly being used across multiple subspecialties. However, the practical application of AI still faces a number of key challenges. Consequently, there is a requirement for a concise overview of the roles of AI across the multiple domains of pediatric medicine, which the current study seeks to address. Aim: To systematically assess the challenges, opportunities, and explainability of AI in pediatric medicine. Methodology: A systematic search was carried out on the peer-reviewed databases PubMed Central and Europe PubMed Central and on grey literature, using search terms related to machine learning (ML) and AI, for the years 2016 to 2022 in the English language. A total of 210 articles were retrieved and screened with PRISMA for abstract, year, language, context, and proximal relevance to the research aims. A thematic analysis was carried out to extract findings from the included studies. Results: Twenty articles were selected for data abstraction and analysis, with three consistent themes emerging. In particular, eleven articles address the current state-of-the-art application of AI in diagnosing and predicting health conditions such as behavioral and mental health, cancer, and syndromic and metabolic diseases. Five articles highlight the specific challenges of AI deployment in pediatric medicine: data security, handling, authentication, and validation. Four articles set out future opportunities for AI adoption: the incorporation of Big Data, cloud computing, precision medicine, and clinical decision support systems. These studies collectively critically evaluate the potential of AI in overcoming current barriers to adoption. 
Conclusion: AI is proving disruptive within pediatric medicine and is presently associated with challenges, opportunities, and the need for explainability. AI should be viewed as a tool to enhance and support clinical decision-making rather than a substitute for human judgement and expertise. Future research should consequently focus on obtaining comprehensive data to ensure the generalizability of research findings.

12.
Article | IMSEAR | ID: sea-218822

ABSTRACT

Modern cloud computing platforms are having trouble keeping up with the enormous volume of data generated by crowdsourcing and the intense computational requirements posed by conventional deep learning applications. Edge computing can reduce resource consumption. The goal of a healthcare system is to offer a dependable and well-planned solution to improve societal health. Patients are more satisfied with their care when doctors take their medical histories into account in designing healthcare systems and providing care, and as a result the healthcare sector is becoming increasingly competitive. Healthcare systems are expanding significantly, which raises issues such as massive data volume, response time, latency, and security susceptibility. Thus, as a well-known distributed architecture, fog computing could assist in solving

13.
Rev. bras. oftalmol ; 82: e0045, 2023. tab
Article in English | LILACS-Express | LILACS | ID: biblio-1515078

ABSTRACT

ABSTRACT A "pandemic" of diabetes mellitus is currently underway. The incidence and prevalence of diabetes and of diabetic retinopathy, its most common microvascular complication, are growing exponentially owing to increased life expectancy in many parts of the world. The growing number of people suffering from diabetic retinopathy poses not only a medical problem but also an economic burden, representing a medical and social challenge. It is extremely important to identify the disease as early as possible and treat it successfully. Technological progress has produced Artificial Intelligence systems capable of detecting diabetic retinopathy. Screening can be made cost-effective through advanced digital technologies, in particular teleretinal screening systems. At present, teleophthalmology and Artificial Intelligence with automatic analysis of fundus photographs can be regarded as impactful tools for improving the detection and management of diabetic retinopathy, especially in closing the accessibility gap for hard-to-reach areas, enabling professional, time- and cost-saving care everywhere and the best possible care for patients.

14.
Acta Academiae Medicinae Sinicae ; (6): 416-421, 2023.
Article in Chinese | WPRIM | ID: wpr-981285

ABSTRACT

Objective To evaluate the impact of a deep learning reconstruction algorithm on the image quality of head and neck CT angiography (CTA) at 100 kVp. Methods CT scanning was performed at 100 kVp for the 37 patients who underwent head and neck CTA in PUMC Hospital from March to April 2021. Four sets of images were reconstructed by three-dimensional adaptive iterative dose reduction (AIDR 3D) and the advanced intelligent Clear-IQ engine (AiCE) (low-, medium-, and high-intensity algorithms), respectively. The average CT value, standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the region of interest in the transverse section image were calculated. Furthermore, the four sets of sagittal maximum intensity projection images of the anterior cerebral artery were scored (1 point: poor; 5 points: excellent). Results The SNR and CNR showed differences between the images reconstructed by AiCE (low, medium, and high intensity) and AIDR 3D (all P<0.01). The quality scores of the images reconstructed by AiCE (low, medium, and high intensity) and AIDR 3D were 4.78±0.41, 4.92±0.27, 4.97±0.16, and 3.92±0.27, respectively, with statistically significant differences (all P<0.001). Conclusion AiCE outperformed AIDR 3D in reconstructing head and neck CTA images at 100 kVp, improving image quality, and is applicable in clinical examinations.
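SNR and CNR as used above are commonly computed from ROI statistics: SNR as the ROI mean divided by its SD, and CNR as the difference between the ROI mean and a background mean divided by the noise SD. Exact definitions vary between studies, so this is a sketch under that common convention, with illustrative CT numbers:

```python
def snr(roi_mean, roi_sd):
    """Signal-to-noise ratio of a region of interest (one common definition)."""
    return roi_mean / roi_sd

def cnr(roi_mean, background_mean, noise_sd):
    """Contrast-to-noise ratio against a background region (one common definition)."""
    return (roi_mean - background_mean) / noise_sd

# Illustrative HU values for an enhanced vessel vs. soft-tissue background
print(snr(400.0, 10.0), cnr(400.0, 50.0, 10.0))
```

Lower image noise (a smaller SD at the same mean attenuation) raises both quantities, which is why a noise-reducing reconstruction such as AiCE can improve SNR and CNR at a fixed tube voltage.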


Subject(s)
Humans , Computed Tomography Angiography/methods , Radiation Dosage , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Signal-To-Noise Ratio , Algorithms
15.
Acta Academiae Medicinae Sinicae ; (6): 273-279, 2023.
Article in Chinese | WPRIM | ID: wpr-981263

ABSTRACT

Objective To evaluate the accuracy of different convolutional neural networks (CNN), representative deep learning models, in the differential diagnosis of ameloblastoma and odontogenic keratocyst, and to compare the diagnostic results between the models and oral radiologists. Methods A total of 1,000 digital panoramic radiographs were retrospectively collected from patients with ameloblastoma (500 radiographs) or odontogenic keratocyst (500 radiographs) in the Department of Oral and Maxillofacial Radiology, Peking University School of Stomatology. Eight CNN, including ResNet (18, 50, 101), VGG (16, 19), and EfficientNet (b1, b3, b5), were selected to distinguish ameloblastoma from odontogenic keratocyst. Transfer learning was employed to train on the 800 panoramic radiographs in the training set through 5-fold cross-validation, and the 200 panoramic radiographs in the test set were used for differential diagnosis. The chi-square test was performed to compare performance among the different CNN. Furthermore, 7 oral radiologists (2 seniors and 5 juniors) made diagnoses on the 200 panoramic radiographs in the test set, and the results were compared between the CNN and the oral radiologists. Results The eight neural network models showed diagnostic accuracy ranging from 82.50% to 87.50%, of which EfficientNet b1 had the highest accuracy, 87.50%. There was no significant difference in diagnostic accuracy among the CNN models (P=0.998, P=0.905). The average diagnostic accuracy of the oral radiologists was (70.30±5.48)%, with no statistical difference in accuracy between senior and junior oral radiologists (P=0.883). The diagnostic accuracy of the CNN models was higher than that of the oral radiologists (P<0.001). Conclusion Deep learning CNN can achieve accurate differential diagnosis between ameloblastoma and odontogenic keratocyst on panoramic radiographs, with higher diagnostic accuracy than oral radiologists.
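The 5-fold cross-validation used for the 800 training radiographs partitions the training set into five folds, each serving once as the validation fold while the other four are used for training. A minimal index-level sketch (illustrative only; the study's actual partition is not published):

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle indices 0..n-1 and partition them into 5 disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]

folds = five_fold_indices(800)  # 800 training radiographs, as in the abstract
# One of the five rotations: fold 0 validates, folds 1-4 train
val = folds[0]
train = [i for f in folds[1:] for i in f]
print(len(train), len(val))
```

Each of the five rotations yields a 640/160 train/validation split, and every radiograph is validated exactly once across the five rotations.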


Subject(s)
Humans , Ameloblastoma/diagnostic imaging , Deep Learning , Diagnosis, Differential , Radiography, Panoramic , Retrospective Studies , Odontogenic Cysts/diagnostic imaging , Odontogenic Tumors
16.
West China Journal of Stomatology ; (6): 218-224, 2023.
Article in English | WPRIM | ID: wpr-981115

ABSTRACT

OBJECTIVES@#This study aims to predict the risk of pulp exposure in deep caries from radiographic images based on convolutional neural network models, compare the predictions of the network models with those of a senior dentist, evaluate the models' usefulness for teaching and training dental students and young dentists, and assist dentists in clarifying treatment plans and conducting good doctor-patient communication before surgery.@*METHODS@#A total of 206 cases of pulpitis caused by deep caries were selected from the Stomatological Hospital of Tianjin Medical University from 2019 to 2022. According to the inclusion and exclusion criteria, the pulp was exposed during caries removal in 104 cases and not exposed in 102 cases. The 206 radiographic images collected were randomly divided in proportion into three sets: 126 images in the training set, 40 in the validation set, and 40 in the test set. Three convolutional neural networks, the visual geometry group network (VGG), residual network (ResNet), and dense convolutional network (DenseNet), were selected to learn the patterns in the training-set images. The validation-set images were used to tune the hyperparameters of the networks. Finally, the 40 images of the test set were used to evaluate the performance of the three network models. A senior dentist specializing in dental pulp predicted whether the deep caries in the 40 test images would result in pulp exposure. The gold standard was whether the pulp was exposed during cavity preparation in the clinical procedure.
The predictions of the three network models (VGG, ResNet, and DenseNet) and the senior dentist on pulp exposure for the 40 test images were compared using the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score to select the best network model.@*RESULTS@#The best network model was the DenseNet model, with an AUC of 0.97. The AUC values of the ResNet model, the VGG model, and the senior dentist were 0.89, 0.78, and 0.87, respectively. Accuracy was not statistically different between the senior dentist (0.850) and the DenseNet model (0.850) (P>0.05). The Kappa consistency test showed moderate reliability (Kappa=0.6>0.4, P<0.05).@*CONCLUSIONS@#Among the three convolutional neural network models, the DenseNet model best predicts whether deep caries will result in pulp exposure on imaging, with predictive performance equivalent to that of a senior dentist specializing in dental pulp.
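The AUC values used above to rank the models and the dentist follow the standard rank-based definition: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch (the function name and the toy labels/scores are illustrative, not data from the study):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: fraction of positive-negative
    pairs where the positive case scores higher (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: 2 exposed (1) and 2 non-exposed (0) cases with model scores
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```

Computed this way, the AUC needs no explicit ROC curve, which is convenient when comparing a model's continuous scores against a clinician's binary calls on the same 40-image test set.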


Subject(s)
Humans , Deep Learning , Neural Networks, Computer , Pulpitis/diagnostic imaging , Reproducibility of Results , ROC Curve , Random Allocation
17.
Journal of Forensic Medicine ; (6): 66-71, 2023.
Article in English | WPRIM | ID: wpr-984182

ABSTRACT

Bone development shows certain regularity with age. This regularity can be used to infer age and serves many fields such as justice, medicine, and archaeology. As a non-invasive method for evaluating the epiphyseal development stage, MRI is widely used in living age estimation. In recent years, the rapid development of machine learning has significantly improved the effectiveness and reliability of living age estimation, which is one of the main directions of current research. This paper summarizes the analysis methods for age estimation from knee joint MRI, introduces current research trends, and discusses future application trends.


Subject(s)
Epiphyses/diagnostic imaging , Age Determination by Skeleton/methods , Reproducibility of Results , Magnetic Resonance Imaging/methods , Knee Joint/diagnostic imaging
18.
Chinese Journal of Radiation Oncology ; (6): 422-429, 2023.
Article in Chinese | WPRIM | ID: wpr-993209

ABSTRACT

Objective:To investigate the role of a three-dimensional dose distribution-based deep learning model in predicting distant metastasis of head and neck cancer.Methods:Radiotherapy and clinical follow-up data of 237 patients with head and neck cancer undergoing intensity-modulated radiotherapy (IMRT) at 4 different institutions were collected. Among them, 131 patients from the HGJ and CHUS institutions were used as the training set, 65 patients from the CHUM institution as the validation set, and 41 patients from the HMR institution as the test set. The three-dimensional dose distributions and GTV contours of the 131 patients in the training set were input into the DM-DOSE model for training, which was then validated with the validation set. Finally, the independent test set was used for evaluation, covering the area under the receiver operating characteristic curve (AUC), balanced accuracy, sensitivity, specificity, concordance index, and Kaplan-Meier survival curve analysis.Results:In predicting distant metastasis of head and neck cancer, the DM-DOSE model based on three-dimensional dose distribution and GTV contours achieved the best prognostic performance, with an AUC of 0.924, and could significantly distinguish patients at high and low risk of distant metastasis (log-rank test, P<0.001). Conclusion:Three-dimensional dose distribution has good predictive value for distant metastasis in head and neck cancer patients treated with IMRT, and the constructed model can effectively predict distant metastasis.
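Alongside AUC, the evaluation above uses the concordance index, a ranking measure suited to censored time-to-event outcomes such as time to distant metastasis. A minimal sketch of Harrell's C-index is shown below; the function name and toy inputs (event times, event indicators, predicted risk scores) are illustrative assumptions, not values from the study.

```python
def concordance_index(times, events, risks):
    """Harrell's C: among comparable pairs (the earlier time had an observed
    event), the fraction where the higher predicted risk failed first."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable only if i failed before j's follow-up ended
            if times[i] < times[j] and events[i]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1        # concordant pair
                elif risks[i] == risks[j]:
                    num += 0.5      # tied risk counts half
    return num / den

# toy cohort: the patient metastasizing earliest got the highest risk score
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # → 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which is why it complements the single-threshold-free AUC when follow-up is censored.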

19.
Chinese Journal of Radiation Oncology ; (6): 319-324, 2023.
Article in Chinese | WPRIM | ID: wpr-993194

ABSTRACT

Objective:To develop an automatic segmentation method for organs at risk (OAR) in head and neck cancer radiotherapy images based on multi-scale fusion and an attention mechanism.Methods:We proposed a new OAR segmentation method for head and neck medical images based on the U-Net convolutional neural network. Spatial and channel squeeze-and-excitation (csSE) attention blocks were combined with the U-Net to enhance its feature expression ability. We also proposed a multi-scale block in the U-Net encoding stage to supplement feature information. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) were used as evaluation criteria for deep learning performance.Results:Segmentation of 22 OAR in the head and neck was performed on the Medical Image Computing and Computer Assisted Intervention (MICCAI) StructSeg2019 dataset. The proposed method improved the average segmentation accuracy by 3%-6% compared with existing methods. The average DSC over the 22 OAR was 78.90% and the average 95%HD was 6.23 mm.Conclusion:Automatic segmentation of OAR from head and neck CT using multi-scale fusion and an attention mechanism achieves high segmentation accuracy, which is promising for enhancing the accuracy and efficiency of radiotherapy in clinical practice.
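The Dice similarity coefficient used as the primary criterion above measures the overlap between a predicted OAR mask and the reference contour (1.0 = perfect overlap, 0.0 = none). A minimal sketch on flattened binary masks; the function name and toy masks are illustrative:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

# toy 2x2 masks flattened to lists: one overlapping voxel out of two each
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

The complementary 95% Hausdorff distance reported in the abstract captures boundary error instead of overlap, which is why the two metrics are usually quoted together for OAR contours.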

20.
Chinese Journal of Radiation Oncology ; (6): 42-47, 2023.
Article in Chinese | WPRIM | ID: wpr-993148

ABSTRACT

Objective:To investigate pseudo-CT generation from cone-beam CT (CBCT) by deep learning methods for the clinical needs of adaptive radiotherapy.Methods:CBCT data from 74 prostate cancer patients acquired with a Varian On-Board Imager, together with their simulation CT images, were used in this study. Deformable registration was implemented with MIM software, and the data were randomly divided into a training set (n=59) and a test set (n=15). U-net, Pix2PixGAN, and CycleGAN were employed to learn the mapping from CBCT to simulation CT. The evaluation indexes included mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), with the deformed CT as the reference. In addition, image quality was analyzed separately, including soft tissue resolution, image noise, and artifacts.Results:The MAE of the images generated by U-net, Pix2PixGAN, and CycleGAN were (29.4±16.1) HU, (37.1±14.4) HU, and (34.3±17.3) HU, respectively. In terms of image quality, the images generated by U-net and Pix2PixGAN were excessively blurred, resulting in image distortion, while the images generated by CycleGAN retained the CBCT image structure and improved image quality.Conclusion:CycleGAN can effectively improve the quality of CBCT images and has potential for use in adaptive radiotherapy.
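The MAE and PSNR indexes used above to score each generator against the deformed reference CT can be sketched directly. The functions below operate on flattened pixel/HU lists; the function names and the `peak` default are assumptions for the sketch, not parameters from the study.

```python
import math

def mae(img_a, img_b):
    """Mean absolute error between two images given as flat value lists."""
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB against a chosen peak value:
    10 * log10(peak^2 / MSE); identical images give +inf."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# toy example: a constant 10-unit error everywhere
print(mae([0, 0], [10, 10]))  # → 10.0
```

Lower MAE and higher PSNR indicate a pseudo-CT closer to the reference; note that CycleGAN was preferred in the abstract on visual quality despite U-net's lower MAE, a reminder that pixel-wise metrics reward blur.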
